Autoencoding With a Classifier System
Authors
Abstract
Autoencoders are data-specific compression algorithms learned automatically from examples. The predominant approach has been to construct single large global models that cover the domain. However, training and evaluating models of increasing size comes at the price of additional time and computational cost. Conditional computation, sparsity, and model pruning techniques can reduce these costs while maintaining performance. Learning classifier systems (LCS) are a framework for adaptively subdividing input spaces into an ensemble of simpler local approximations that together cover the domain. LCS perform conditional computation through the use of a population of individual gating/guarding components, each associated with a local approximation. This article explores the use of LCS to adaptively decompose the input domain into a collection of small autoencoders where local solutions of different complexity may emerge. In addition to benefits in convergence time and computational cost, it is shown to be possible to reduce the code size as well as the resulting decoder computational cost when compared with an equivalent global model.
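As a rough illustration of the gating idea described in the abstract, the sketch below shows conditional computation with a population of guarded local autoencoders in Python. It is only a minimal sketch, not the article's implementation: the hyperrectangle guards, the linear autoencoders, the best-match update rule, and all names (GuardedAutoencoder, AutoencodingLCS) are assumptions made for illustration, and the evolutionary operators that adapt the guards and the network complexity in an LCS are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)


class GuardedAutoencoder:
    """One population member: a guard (condition) plus a small linear autoencoder."""

    def __init__(self, n_in, n_code, lr=0.05):
        # Guard: a random hyperrectangle over the (assumed [0, 1]-scaled) input space.
        centre = rng.uniform(0.0, 1.0, n_in)
        spread = rng.uniform(0.3, 0.6, n_in)
        self.lower, self.upper = centre - spread, centre + spread
        # Small linear autoencoder: encoder E and decoder D.
        self.E = rng.normal(0.0, 0.1, (n_code, n_in))
        self.D = rng.normal(0.0, 0.1, (n_in, n_code))
        self.lr = lr

    def matches(self, x):
        """Gating/guarding: this member only computes when its guard covers x."""
        return bool(np.all((x >= self.lower) & (x <= self.upper)))

    def reconstruct(self, x):
        return self.D @ (self.E @ x)

    def update(self, x):
        """One SGD step on squared reconstruction error."""
        code = self.E @ x
        err = self.D @ code - x
        self.D -= self.lr * np.outer(err, code)
        self.E -= self.lr * np.outer(self.D.T @ err, x)


class AutoencodingLCS:
    """A population of guarded autoencoders; only matching members are evaluated."""

    def __init__(self, n_in, n_code, pop_size=50):
        self.pop = [GuardedAutoencoder(n_in, n_code) for _ in range(pop_size)]

    def step(self, x):
        match_set = [m for m in self.pop if m.matches(x)]  # conditional computation
        if not match_set:
            return None  # covering/evolutionary operators would normally create a member here
        best = min(match_set, key=lambda m: np.sum((m.reconstruct(x) - x) ** 2))
        best.update(x)
        return best.reconstruct(x)


# Toy usage on random data scaled to [0, 1].
lcs = AutoencodingLCS(n_in=8, n_code=2)
for x in rng.uniform(0.0, 1.0, (1000, 8)):
    lcs.step(x)
```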
Similar Resources
Autoencoding topology
The problem of learning a manifold structure on a dataset is framed in terms of a generative model, to which end we use ideas behind autoencoders (namely adversarial/Wasserstein autoencoders) to fit deep neural networks. From a machine learning perspective, the resulting structure, an atlas of a manifold, may be viewed as a combination of dimensionality reduction and “fuzzy” clustering.
RTCS: a Reactive with Tags Classifier System
In this work, a new Classifier System (CS) is proposed. The system, a Reactive with Tags Classifier System (RTCS), is able to take environmental situations into account in intermediate decisions. CSs are special production systems, where conditions and actions are codified in order to learn new rules by means of Genetic Algorithms (GA). The RTCS has been designed to generate sequences of action...
Optimal Binary Autoencoding with Pairwise Correlations
We formulate learning of a binary autoencoder as a biconvex optimization problem which learns from the pairwise correlations between encoded and decoded bits. Among all possible algorithms that use this information, ours finds the autoencoder that reconstructs its inputs with worst-case optimal loss. The optimal decoder is a single layer of artificial neurons, emerging entirely from the minimax...
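For context, the sketch below gives one hedged reading of the two ingredients named in this abstract: an empirical pairwise-correlation statistic between encoded bits and data bits, and a single layer of sign neurons as decoder. The function names and the use of the raw correlation matrix as decoder weights are illustrative assumptions; how the cited work actually obtains its minimax-optimal weights is not reproduced here.

```python
import numpy as np

def pairwise_correlations(H, X):
    """Empirical pairwise correlations between encoded bits and data bits.

    H: (n_samples, n_code) array with entries in {-1, +1} (the encodings).
    X: (n_samples, n_in) array with entries in {-1, +1} (the data bits).
    Returns a (n_code, n_in) matrix of averages E[h_j * x_i].
    """
    return H.T @ X / H.shape[0]

def single_layer_decode(h, B):
    """Decode one encoding h with a single layer of sign neurons weighted by B."""
    return np.sign(h @ B)
```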
Grammatical Inference with Grammar-based Classifier System
This paper takes up the task of training a Grammar-based Classifier System (GCS) to learn grammar from data. GCS is a new model of Learning Classifier Systems in which the population of classifiers takes the form of a context-free grammar rule set in Chomsky Normal Form. GCS has been proposed to address both natural language grammar induction and learning formal grammars for DNA ...
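To make that representation concrete, here is a minimal Python sketch of a classifier population stored as Chomsky-Normal-Form productions. The CNFRule name and the set-based population are illustrative assumptions; the GCS learning machinery (matching, GA operators, fitness) is not shown.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class CNFRule:
    """One classifier: a Chomsky-Normal-Form production A -> B C or A -> 'a'."""
    lhs: str
    rhs: tuple  # ("B", "C") for two non-terminals, or ("a",) for a terminal

# A toy population of classifiers forming a context-free grammar for the string "ab".
population = {
    CNFRule("S", ("A", "B")),
    CNFRule("A", ("a",)),
    CNFRule("B", ("b",)),
}
```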
Image Restoration using Autoencoding Priors
We propose to leverage denoising autoencoder networks as priors to address image restoration problems. We build on the key observation that the output of an optimal denoising autoencoder is a local mean of the true data density, and the autoencoder error (the difference between the output and input of the trained autoencoder) is a mean shift vector. We use the magnitude of this mean shift vector...
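As a hedged illustration of the quantity described here, the snippet below computes the autoencoder error (output minus input) of a trained denoising autoencoder and its squared magnitude, which could serve as a prior term. The callable `dae` and the `weight` parameter are assumptions, and the way this term enters the restoration objective in the cited work is not reproduced.

```python
import numpy as np

def autoencoder_prior(x, dae, weight=1.0):
    """Squared magnitude of the autoencoder error dae(x) - x.

    `dae` is assumed to be a trained denoising autoencoder exposed as a callable
    that maps an image array to its reconstruction; its error approximates a
    mean-shift vector pointing towards a local mean of the true data density.
    """
    shift = dae(x) - x
    return weight * float(np.sum(shift ** 2))
```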
Journal
Journal title: IEEE Transactions on Evolutionary Computation
Year: 2021
ISSN: 1941-0026, 1089-778X
DOI: https://doi.org/10.1109/tevc.2021.3079320